
Automated deviation check to other ontologies using ABECTO #525

Open
wants to merge 5 commits into main
Conversation

@jmkeil (Contributor) commented Jul 19, 2022

Hi. This adds a configuration that automatically compares your ontology with other unit ontologies using ABECTO after each commit and returns a table of the deviations found. Perhaps you are interested in using this.
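For readers unfamiliar with GitHub Actions, a workflow of this shape might look roughly as follows. This is a minimal sketch, not the configuration from this PR: the file name, action versions, and the ABECTO invocation are assumptions (see the actual commits and the ABECTO documentation for the real command line and plan format).

```yaml
# Hypothetical sketch of a per-commit comparison workflow.
# Paths, versions, and the ABECTO invocation are placeholders.
name: Ontology comparison
on:
  push:  # run the comparison after each commit

jobs:
  compare:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run ABECTO comparison
        # Placeholder invocation; the actual parameters (including
        # --failOnDeviation, discussed below in this thread) differ.
        run: java -jar abecto.jar comparison/plan.trig
```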

@jmkeil (Contributor, Author) commented Jul 20, 2022

I wasn't sure where to put the comparison configuration. For some reason it doesn't work if I put it into the workflow folder as I did in HajoRijgersberg/OM#69 (analogous PR for OM).

@jmkeil (Contributor, Author) commented Jul 20, 2022

At https://github.com/jmkeil/qudt-public-repo/actions/runs/2698908908 you can find a preview of how the results will look. The job is marked as failed because deviations have been found. Deviations can be handled by

Alternatively, the parameter --failOnDeviation in c904f9b#diff-311f6477bc95af7ccedbe3225a241452ddb76b61cea146f6103c84a5059d6fb8R12 can be removed. The job will then fail only if the comparison could not be executed for some reason.

@jhodgesatmb (Collaborator) commented Jul 20, 2022 via email

@dr-shorthair (Contributor) commented
I'm more optimistic. There are different emphases coming from the different projects, so the ontologies take different pathways through the problem. Nevertheless, the results should be equivalent.

The tool merely highlights inconsistencies, which provides an opportunity for further investigation.
Sometimes it will find an error that you had missed, which you can then correct.

It is informative, and I don't see the harm in running it automatically.

@jmkeil (Contributor, Author) commented Jul 22, 2022

I had a look at the documentation, and it seems possible to configure GitHub Actions for manual execution only. In my opinion, automated execution would be better, as manual executions are more likely not to happen at all; at least in software testing, this is best practice. But, of course, this is your project and your decision. What are your preferences?
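As a side note on the two trigger modes discussed here: GitHub Actions distinguishes them via the `on:` section of the workflow file. A sketch (the workflow name is hypothetical; only the trigger keys matter):

```yaml
# Trigger options for a comparison workflow (sketch).
name: Ontology comparison
on:
  push:              # automated: run after every commit
  workflow_dispatch: # manual: run on demand from the Actions tab
```

Keeping only `workflow_dispatch` gives the manual-only setup mentioned above; keeping only `push` gives the fully automated one.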

We will place the comparison [configuration results] into a folder that isn't included in the release.

Originally posted by @jhodgesatmb in #526 (comment)

What would be the preferred path for the comparison configuration (not the results; these do not get committed to the repository)?

The problem as I see it is that there is no baseline ontology that is accepted as the authoritative standard.

@jhodgesatmb: We had this discussion earlier. My answer is still the same. (And I don't have a problem with sticking to our different viewpoints.)

The tool merely highlights inconsistencies, which provides an opportunity for further investigation.
Sometimes it will find an error that you had missed, which you can then correct.

That is the idea behind the tool.

@jmkeil (Contributor, Author) commented Aug 22, 2022

I just updated the comparison configuration, revealing further deviations.

@jmkeil (Contributor, Author) commented Sep 7, 2022

The last update removes from the results those deviations that are obviously caused by OM or SWEET.
